Many directions are possible for future work. We are
interested in exploring the entire interaction spectrum. One
end of the spectrum corresponds to fully interactive control;
the other corresponds to passive viewing without any
interaction. In between, various levels of constrained
navigation and guidance are worth exploring. The incorporation
of artificial speech technology also suggests exploiting the
opposite direction: parsing human speech and letting the
spectator's words influence the interactive experience or, in
our case, the narrative of the scientific documentary.
ACKNOWLEDGMENTS
The authors would like to thank Nanographics GmbH
(nanographics.at) for providing Marion.
REFERENCES
[1] H. Akiba, C. Wang, and K.-L. Ma, “AniViz: A template-based
animation tool for volume visualization,” IEEE Comput. Graph.
Appl., vol. 30, no. 5, pp. 61–71, Sep./Oct. 2010, doi: 10.1109/
MCG.2009.107.
[2] F. Amini, N. Henry Riche, B. Lee, C. Hurter, and P. Irani,
“Understanding data videos: Looking at narrative visualization
through the cinematography lens,” in Proc. Conf. Hum. Factors
Comput. Syst., New York, NY, USA, 2015, pp. 1459–1468,
doi: 10.1145/2702123.2702431.
[3] L. Autin et al., “Mesoscope: A web-based tool for mesoscale data
integration and curation,” in Proc. Workshop Mol. Graph. Vis. Anal.
Mol. Data, Goslar, Germany, 2020, pp. 23–31, doi: 10.2312/
molva.20201098.
[4] A. Birkeland, S. Bruckner, A. Brambilla, and I. Viola, "Illustrative
membrane clipping," Comput. Graph. Forum, vol. 31, no. 3,
pp. 905–914, 2012, doi: 10.1111/j.1467-8659.2012.03083.x.
[5] J. Blinn, “Where am I? What am I looking at?,” IEEE Comput. Graph.
Appl., vol. 8, no. 4, pp. 76–81, Jul. 1988, doi: 10.1109/38.7751.
[6] A. Bock et al., “OpenSpace: A system for astrographics,” IEEE
Trans. Vis. Comput. Graph., vol. 26, no. 1, pp. 633–642, Jan. 2020,
doi: 10.1109/TVCG.2019.2934259.
[7] N. Burtnyk, A. Khan, G. Fitzmaurice, R. Balakrishnan, and G. Kur-
tenbach, “StyleCam: Interactive stylized 3D navigation using inte-
grated spatial & temporal controls,” in Proc. Annu. ACM Symp. User
Interface Softw. Technol., New York, NY, USA, 2002, pp. 101–110,
doi: 10.1145/571985.572000.
[8] N. Burtnyk, A. Khan, G. Fitzmaurice, and G. Kurtenbach,
“ShowMotion: Camera motion based 3D design review,” in Proc.
Symp. Interact. 3D Graph. Games, New York, NY, USA, 2006,
pp. 167–174, doi: 10.1145/1111411.1111442.
[9] D. Ceneda et al., “Characterizing guidance in visual analytics,”
IEEE Trans. Vis. Comput. Graph., vol. 23, no. 1, pp. 111–120, Jan.
2017, doi: 10.1109/TVCG.2016.2598468.
[10] M. Christie, R. Machap, J.-M. Normand, P. Olivier, and J. Picker-
ing, “Virtual camera planning: A survey,” in Proc. Smart Graph.,
Berlin, Germany, 2005, pp. 40–52, doi: 10.1007/11536482_4.
[11] M. Christie, P. Olivier, and J.-M. Normand, “Camera control
in computer graphics," Comput. Graph. Forum, vol. 27, no. 8,
pp. 2197–2218, 2008, doi: 10.1111/j.1467-8659.2008.01181.x.
[12] J. M. Clark and A. Paivio, “Dual coding theory and education,”
Edu. Psychol. Rev., vol. 3, no. 3, pp. 149–210, 1991, doi: 10.1007/
BF01320076.
[13] The Qt Company, "Qt Speech," Accessed: Jul. 2020. [Online].
Available: https://doc.qt.io/qt-5/qtspeech-index.html
[14] C. Daly, L. Clunie, and M. Ma, “From microscope to movies: 3D
animations for teaching physiology," Microsc. Anal., vol. 28, no. 6,
pp. 7–10, 2014.
[15] N. Elmqvist and P. Tsigas, “A taxonomy of 3D occlusion manage-
ment for visualization,” IEEE Trans. Vis. Comput. Graph., vol. 14,
no. 5, pp. 1095–1109, Sep./Oct. 2008, doi: 10.1109/TVCG.2008.59.
[16] T. Fujiwara, T. Crnovrsanin, and K.-L. Ma, “Concise provenance
of interactive network analysis," Vis. Informat., vol. 2, no. 4,
pp. 213–224, 2018, doi: 10.1016/j.visinf.2018.12.002.
[17] Q. Galvane et al., “Directing cinematographic drones,” ACM Trans.
Graph., vol. 37, no. 3, pp. 34:1–34:18, 2018, doi: 10.1145/3181975.
[18] N. Gershon and W. Page, “What storytelling can do for informa-
tion visualization,” Commun. ACM, vol. 44, no. 8, pp. 31–37, 2001,
doi: 10.1145/381641.381653.
[19] A. Glassner, "Interactive storytelling: People, stories, and
games," in Proc. Int. Conf. Virtual Storytelling, Berlin, Germany,
2001, pp. 51–60, doi: 10.1007/3-540-45420-9_7.
[20] Google, "Cloud text-to-speech API," Accessed: Jul. 2020.
[Online]. Available: https://cloud.google.com/text-to-speech/
docs/reference/rest/
[21] S. Gratzl, A. Lex, N. Gehlenborg, N. Cosgrove, and M. Streit, “From
visual exploration to storytelling and back again,” Comput. Graph.
Forum, vol. 35, no. 3, pp. 491–500, 2016, doi: 10.1111/cgf.12925.
[22] A. J. Hanson, E. A. Wernert, and S. B. Hughes, “Constrained navi-
gation environments,” in Proc. Dagstuhl Scientific Vis. Conf., 1997,
p. 95, doi: 10.1109/DAGSTUHL.1997.10024.
[23] G. Höst, K. Palmerius, and K. Schönborn, "Nano for the public:
An exploranation perspective," IEEE Comput. Graph. Appl.,
vol. 40, no. 2, pp. 32–42, Mar./Apr. 2020, doi: 10.1109/
MCG.2020.2973120.
[24] J. Hullman and N. Diakopoulos, “Visualization rhetoric: Fram-
ing effects in narrative visualization,” IEEE Trans. Vis. Comput.
Graph., vol. 17, no. 12, pp. 2231–2240, Dec. 2011, doi: 10.1109/
TVCG.2011.255.
[25] J. Iwasa, “Crafting a career in molecular animation,” Mol. Biol. Cell,
vol. 25, no. 19, pp. 2891–2893, 2014, doi: 10.1091/mbc.e14-01-0699.
[26] G. T. Johnson, L. Autin, M. Al-Alusi, D. S. Goodsell, M. F. Sanner,
and A. J. Olson, “cellPACK: A virtual mesoscope to model and
visualize structural systems biology,” Nat. Methods, vol. 12, no. 1,
pp. 85–91, 2015, doi: 10.1038/nmeth.3204.
[27] G. T. Johnson, D. S. Goodsell, L. Autin, S. Forli, M. F. Sanner, and
A. J. Olson, "3D molecular models of whole HIV-1 virions generated
with cellPACK," Faraday Discuss., vol. 169, pp. 23–44, 2014,
doi: 10.1039/c4fd00017j.
[28] M. Kanehisa and S. Goto, “KEGG: Kyoto encyclopedia of genes
and genomes,” Nucleic Acids Res., vol. 28, no. 1, pp. 27–30, 2000,
doi: 10.1093/nar/28.1.27.
[29] R. Karpe, "A survey: On text to speech synthesis," Int. J. Res. Appl.
Sci. Eng. Technol., vol. 6, no. 3, pp. 351–355, 2018, doi: 10.22214/
ijraset.2018.3054.
[30] R. Kosara and J. Mackinlay, “Storytelling: The next step for visual-
ization,” IEEE Comput., vol. 46, no. 5, pp. 44–50, May 2013,
doi: 10.1109/MC.2013.36.
[31] D. Kouřil, T. Isenberg, B. Kozlíková, M. Meyer, E. Gröller, and
I. Viola, "HyperLabels: Browsing of dense and hierarchical molec-
ular 3D models," IEEE Trans. Vis. Comput. Graph., vol. 27, no. 8,
pp. 3493–3504, Aug. 2021, doi: 10.1109/TVCG.2020.2975583.
[32] B. C. Kwon, F. Stoffel, D. Jäckle, B. Lee, and D. Keim, "VisJockey:
Enriching data stories through orchestrated interactive visual-
ization,” in Proc. Comput. Journalism Symp., New York, NY, USA,
2014.
[33] M. Le Muzic, P. Mindek, J. Sorger, L. Autin, D. Goodsell, and
I. Viola, “Visibility equalizer: Cutaway visualization of mesoscopic
biological models,” Comput. Graph. Forum, vol. 35, no. 3, pp. 161–170,
2016, doi: 10.1111/cgf.12892.
[34] B. Lee, N. H. Riche, P. Isenberg, and S. Carpendale, “More than
telling a story: Transforming data into visually shared stories,”
IEEE Comput. Graph. Appl., vol. 35, no. 5, pp. 84–90, Sep./Oct.
2015, doi: 10.1109/MCG.2015.99.
[35] W. Li, L. Ritter, M. Agrawala, B. Curless, and D. Salesin,
"Interactive cutaway illustrations of complex 3D models," ACM
Trans. Graph., vol. 26, no. 3, pp. 31:1–31:11, 2007, doi: 10.1145/
1276377.1276416.
[36] I. Liao, W.-H. Hsu, and K.-L. Ma, “Storytelling via navigation: A
novel approach to animation for scientific visualization,” in Proc.
Smart Graph., Cham, Switzerland, 2014, pp. 1–14, doi: 10.1007/
978-3-319-11650-1_1.
[37] E. M. Lidal, H. Hauser, and I. Viola, “Geological storytelling –
Graphically exploring and communicating geological sketches,”
in Proc. Sketch-Based Interfaces Model., Goslar, Germany, 2012,
pp. 11–20, doi: 10.2312/SBM/SBM12/011-020.
[38] C. Lino, M. Christie, F. Lamarche, G. Schofield, and P. Olivier, "A
real-time cinematography system for interactive 3D environ-
ments," in Proc. ACM SIGGRAPH/Eurographics Symp. Comput.
Animation, Goslar, Germany, 2010, pp. 139–148, doi: 10.2312/
SCA/SCA10/139-148.
[39] W. E. Lorensen, “Geometric clipping using boolean textures,” in
Proc. Vis., 1993, pp. 268–274, doi: 10.1109/VISUAL.1993.398878.
KOUŘIL ET AL.: MOLECUMENTARY: ADAPTABLE NARRATED DOCUMENTARIES USING MOLECULAR VISUALIZATION 1745